An Intelligent QoS Identification for Untrustworthy Web Services Via Two-phase Neural Networks
QoS identification for untrustworthy Web services is critical to QoS
management in service computing, since the performance of untrustworthy Web
services may result in QoS degradation. The key issue is to intelligently learn
the characteristics of trustworthy Web services at different QoS levels, and then
to identify the untrustworthy ones according to the characteristics of their QoS
metrics. Among intelligent identification approaches, deep neural
networks have emerged as a powerful technique in recent years. In this paper, we
propose a novel two-phase neural network model to identify untrustworthy
Web services. In the first phase, Web services are collected from a published
QoS dataset, and we design a feedforward neural network model to build a
classifier over Web services with different QoS levels. In the second phase, we
employ a probabilistic neural network (PNN) model to identify the untrustworthy
Web services within each class. The experimental results show the
proposed approach achieves a 90.5% identification ratio, far higher than other
competing approaches.
Comment: 8 pages, 5 figures
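The second-phase PNN described above can be sketched as a Parzen-window classifier: each training sample acts as a pattern unit, and a test point is assigned to the class with the highest kernel-density score. The following is a minimal, hypothetical illustration with made-up toy features (the function name, sigma value, and data are assumptions, not the authors' implementation):

```python
import numpy as np

def pnn_predict(train_X, train_y, test_X, sigma=0.5):
    """Probabilistic neural network: assign each test point to the class
    whose Gaussian-kernel density at that point is highest."""
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        Xc = train_X[train_y == c]  # pattern units for class c
        d2 = ((test_X[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        # Summation unit: average kernel response over class-c patterns.
        scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1))
    return classes[np.argmax(np.column_stack(scores), axis=1)]

# Toy QoS-like features: two clusters standing in for two QoS levels.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
preds = pnn_predict(X, y, np.array([[0.1, 0.0], [2.1, 1.9]]))
print(preds)  # one query point near each cluster
```

A PNN has no iterative training: the whole training set is stored, and the smoothing parameter sigma is the only knob, which makes it a natural fit for the per-class identification step after the first-phase classifier has grouped services by QoS level.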
An Efficient Platform for Large-Scale MapReduce Processing
In this thesis we propose and implement MMR, a new open-source MapReduce model built on MPI for parallel and distributed programming. MMR combines Pthreads, MPI, and Google's MapReduce processing model to support multi-threaded as well as distributed parallelism. Experiments show that our model significantly outperforms the leading open-source solution, Hadoop: it demonstrates linear scaling for CPU-intensive processing and even super-linear scaling for indexing-related workloads. In addition, we designed an MMR live DVD that automates the installation and configuration of a Linux cluster with the integrated MMR library, enabling the development and execution of MMR applications.
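The multi-threaded map-then-reduce pattern that MMR layers over Pthreads and MPI can be sketched in miniature with a thread pool: worker threads map over input chunks in parallel, and the partial results are then merged by a reduce function. This is a toy word-count sketch, not MMR's actual C/MPI implementation; the function names and data are assumptions for illustration:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def map_reduce(chunks, map_fn, reduce_fn, workers=4):
    # Map phase: each thread processes one chunk, analogous to
    # MMR's Pthreads workers on a single node.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(map_fn, chunks))
    # Reduce phase: fold the per-chunk partial results together
    # (MMR would additionally exchange partials across MPI ranks).
    result = partials[0]
    for p in partials[1:]:
        result = reduce_fn(result, p)
    return result

chunks = ["a b a", "b c", "a c c"]
counts = map_reduce(chunks,
                    lambda s: Counter(s.split()),  # map: count words in a chunk
                    lambda x, y: x + y)            # reduce: merge two counters
print(counts)  # word totals: a -> 3, b -> 2, c -> 3
```

The distributed half of the design would replace the in-process merge with message passing between ranks, which is where the MPI layer of MMR comes in.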
A Generalized Recurrent Neural Architecture for Text Classification with Multi-Task Learning
Multi-task learning leverages potential correlations among related tasks to
extract common features and yield performance gains. However, most previous
works only consider simple or weak interactions, thereby failing to model
complex correlations among three or more tasks. In this paper, we propose a
multi-task learning architecture with four types of recurrent neural layers to
fuse information across multiple related tasks. The architecture is
structurally flexible and considers various interactions among tasks, which can
be regarded as a generalized case of many previous works. Extensive experiments
on five benchmark datasets for text classification show that our model can
significantly improve the performance of related tasks with additional
information from the others.
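The core idea of such architectures, a recurrent encoder whose representation is shared across tasks while each task keeps its own output head, can be sketched as follows. This is a deliberately tiny, hypothetical illustration in plain NumPy (the layer sizes, task names, and weights are all made up), not the paper's four-layer-type architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared recurrent layer: one set of weights reused by every task.
U = rng.normal(0, 0.1, (4, 8))      # input embedding -> hidden
W_sh = rng.normal(0, 0.1, (8, 8))   # hidden -> hidden recurrence
# Task-specific heads: each task gets its own output projection.
heads = {"task_a": rng.normal(0, 0.1, (8, 2)),   # 2-class task
         "task_b": rng.normal(0, 0.1, (8, 3))}   # 3-class task

def shared_encode(tokens):
    """Run a simple recurrence over a (T, 4) sequence of embeddings."""
    h = np.zeros(8)
    for x in tokens:
        h = np.tanh(x @ U + h @ W_sh)
    return h

def predict(tokens, task):
    # The shared hidden state feeds whichever task head is requested.
    logits = shared_encode(tokens) @ heads[task]
    return int(np.argmax(logits))

label = predict(rng.normal(0, 1, (5, 4)), "task_b")
```

Training would backpropagate each task's loss through its own head and the shared layer, so gradients from one task shape the features available to the others; the paper's contribution is in richer interaction patterns between the per-task recurrent layers than this single shared encoder.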
A Semi-Supervised Two-Stage Approach to Learning from Noisy Labels
The recent success of deep neural networks is powered in part by large-scale
well-labeled training data. However, laboriously annotating an
ImageNet-like dataset is a daunting task. In contrast, it is fairly convenient,
fast, and cheap to collect training images from the Web along with their noisy
labels. This signals the need for alternative approaches to training deep
neural networks with such noisy labels. Existing methods tackling this problem
either try to identify and correct the wrong labels or reweight the data terms
in the loss function according to the inferred noise rates. Both strategies
inevitably incur errors for some of the data points. In this paper, we contend
that it is actually better to ignore the labels of some of the data points than
to keep them if the labels are incorrect, especially when the noise rate is
high. After all, wrong labels can mislead a neural network into a bad local
optimum. We propose a two-stage framework for learning from noisy labels.
In the first stage, we identify a small portion of images from the noisy
training set of which the labels are correct with a high probability. The noisy
labels of the other images are ignored. In the second stage, we train a deep
neural network in a semi-supervised manner. This framework effectively takes
advantage of the whole training set and yet only a portion of its labels that
are most likely correct. Experiments on three datasets verify the effectiveness
of our approach, especially when the noise rate is high.
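The first-stage idea, keep only the labels that are correct with high probability and ignore the rest, can be approximated with a small-loss selection heuristic: fit a model on the noisy labels, keep the samples whose loss is lowest, and retrain on that trusted subset. The sketch below uses logistic regression on synthetic data as a stand-in; the selection rule, subset size, and the plain retraining step (in place of the paper's semi-supervised second stage) are all assumptions for illustration:

```python
import numpy as np

def fit_logreg(X, y, steps=200, lr=0.5):
    """Plain gradient-descent logistic regression (no bias term)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
# Two well-separated classes; flip 30% of the labels to simulate Web noise.
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50, dtype=float)
y_noisy = y.copy()
flip = rng.choice(100, 30, replace=False)
y_noisy[flip] = 1 - y_noisy[flip]

# Stage 1: fit on the noisy labels, then keep the small-loss samples,
# whose labels are most likely correct; the rest are simply ignored.
w0 = fit_logreg(X, y_noisy)
p = 1.0 / (1.0 + np.exp(-X @ w0))
loss = -(y_noisy * np.log(p + 1e-9) + (1 - y_noisy) * np.log(1 - p + 1e-9))
keep = np.argsort(loss)[:60]

# Stage 2: retrain on the trusted subset only (the paper instead trains
# semi-supervised, using the ignored images as unlabeled data).
w1 = fit_logreg(X[keep], y_noisy[keep])
acc = ((1.0 / (1.0 + np.exp(-X @ w1)) > 0.5) == y).mean()
```

Even this crude version illustrates the abstract's point: discarding suspect labels outright avoids the errors that label-correction or loss-reweighting would inevitably introduce on those points.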